# Multi-scale Training
## Featured Recommended AI Models

### Open Reasoner Zero 32B
- Author: Open-Reasoner-Zero
- License: MIT
- Task: Large Language Model
- Library: Transformers
- Downloads: 498 · Likes: 29

The first open-source implementation of large-scale reasoning-oriented reinforcement learning, focusing on scalability, simplicity, and ease of use.
### LongAlpaca 70B
- Author: Yukang
- Task: Large Language Model
- Library: Transformers
- Downloads: 1,293 · Likes: 21

LongLoRA is an efficient fine-tuning technique that gives large language models long-context processing capabilities through a shifted short attention mechanism, supporting context lengths from 8k to 100k tokens.
### MobileViT XX Small
- Author: apple
- License: Other
- Task: Image Classification
- Library: Transformers
- Downloads: 6,077 · Likes: 16

MobileViT is a lightweight, low-latency vision Transformer model that combines the strengths of CNNs and Transformers, making it suitable for mobile devices.
### MobileViT X Small
- Author: apple
- License: Other
- Task: Image Classification
- Library: Transformers
- Downloads: 1,062 · Likes: 6

MobileViT is a lightweight, low-latency vision Transformer model that combines the advantages of CNNs and Transformers, making it suitable for mobile devices.
### MobileViT Small
- Author: apple
- License: Other
- Task: Image Classification
- Library: Transformers
- Downloads: 894.23k · Likes: 65

MobileViT is a lightweight, low-latency vision Transformer model that combines the strengths of CNNs and Transformers, making it suitable for mobile devices.
### MobileViT Small
- Author: Matthijs
- License: Other
- Task: Image Classification
- Library: Transformers
- Downloads: 39 · Likes: 0

MobileViT is a lightweight, low-latency vision Transformer model that combines the advantages of CNNs and Transformers, making it suitable for mobile devices.
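All four MobileViT entries above share the same Transformers image-classification API, so a single usage example covers them. The checkpoint ID `apple/mobilevit-small` is an assumption derived from the author and model names in the list (the x-small and xx-small variants would work the same way), and `cat.jpg` is a placeholder image path.

```python
# Minimal sketch (assumed Hub ID, placeholder image path): image classification
# with a MobileViT checkpoint via Hugging Face Transformers.
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "apple/mobilevit-small"  # assumed repository name
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("cat.jpg")  # placeholder: any RGB image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
label_id = logits.argmax(-1).item()
print(model.config.id2label[label_id])
```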